    Implementation and Analysis of an Image-Based Global Illumination Framework for Animated Environments

    We describe a new framework for efficiently computing and storing global illumination effects for complex, animated environments. The framework allows the rapid generation of sequences representing an arbitrary path in a view space within an environment in which both the viewer and objects move. The global illumination is stored as time sequences of range-images at base locations that span the view space. We present algorithms for determining the locations of these base images and the time steps required to adequately capture the effects of object motion. We also present algorithms for computing the global illumination in the base images that exploit spatial and temporal coherence by considering direct and indirect illumination separately. We discuss an initial implementation of the framework. Results and analysis demonstrate the effectiveness of the individual phases of the approach; we conclude with an application of the complete framework to a complex environment that includes object motion.
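
    To make the reconstruction step concrete, the sketch below shows one way a view along a path could be generated at an arbitrary time: interpolate the bracketing frames of the time sequences stored at the two nearest base locations, then blend the results. This is a minimal illustration only; the function and data-structure names are assumptions, and the paper's framework reprojects range-images rather than simply averaging pixels.

```python
import numpy as np

def reconstruct_view(base_sequences, base_positions, eye, t):
    """Illustrative sketch: blend the two base-image time sequences
    nearest to the eye, after interpolating each to time t.
    base_sequences[i] is assumed to be {'times': (T,), 'frames': (T, H, W, 3)}."""
    d = np.linalg.norm(base_positions - eye, axis=1)
    i, j = np.argsort(d)[:2]                 # two nearest base locations
    w = d[j] / (d[i] + d[j] + 1e-9)          # inverse-distance weight for base i

    def sample_at(seq, t):
        times, frames = seq['times'], seq['frames']
        k = min(max(np.searchsorted(times, t), 1), len(times) - 1)
        a = (t - times[k - 1]) / (times[k] - times[k - 1])
        # Linear blend between the two time steps bracketing t.
        return (1 - a) * frames[k - 1] + a * frames[k]

    return w * sample_at(base_sequences[i], t) + (1 - w) * sample_at(base_sequences[j], t)
```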

    Texture Resampling While Ray-Tracing: Approximating the Convolution Region Using Caching

    We present a cache-based approach to the difficult problem of performing visually acceptable texture resampling/filtering while ray-tracing. While many good methods have been proposed to handle the error introduced by the ray-tracing algorithm when sampling in screen space, handling this error in texture space has been less adequately addressed. Our solution is to introduce the Convolution Mask Approximation Module (CMAM). The CMAM locally approximates the convolution region in texture space as a set of overlapping texture triangles by using a texture sample caching system and ray tagging. Since the caching mechanism is hidden within the CMAM, the ray-tracing algorithm itself is unchanged while achieving an adequate level of texture filtering (area sampling as opposed to point sampling/interpolation in texture space). The CMAM is easily adapted to incorporate prefiltering methods such as MIP mapping and summed-area tables, as well as direct convolution methods such as elliptical weighted average filtering.
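
    The sketch below conveys the flavor of the caching idea: rays tagged as screen-space neighbours deposit their texture-space hit points under a shared tag, and a lookup area-averages over a triangle formed from nearby cached hits instead of point sampling. The class, its methods, and the two helpers are hypothetical stand-ins, not the CMAM's actual interface.

```python
from collections import OrderedDict

def point_sample(texture, u, v):
    # Crude stand-in: nearest-texel lookup, (u, v) in [0, 1].
    h, w = texture.shape[:2]
    return texture[int(v * (h - 1)), int(u * (w - 1))]

def area_average(texture, triangle):
    # Crude stand-in for area sampling: average the samples at the
    # triangle's vertices rather than integrating over its interior.
    return sum(point_sample(texture, u, v) for u, v in triangle) / len(triangle)

class CMAMCache:
    """Hypothetical sketch of a CMAM-style texture sample cache.
    Rays tagged as neighbours share an entry; their (u, v) hits
    approximate the convolution region as overlapping triangles."""

    def __init__(self, capacity=4096):
        self.hits = OrderedDict()          # ray tag -> list of (u, v) hits
        self.capacity = capacity

    def record(self, tag, u, v):
        self.hits.setdefault(tag, []).append((u, v))
        if len(self.hits) > self.capacity:
            self.hits.popitem(last=False)  # evict the oldest tag

    def filtered_lookup(self, texture, tag, u, v):
        cached = self.hits.get(tag, [])
        if len(cached) < 2:
            # Too few neighbouring hits cached yet: fall back to point sampling.
            return point_sample(texture, u, v)
        # Approximate the convolution region by the triangle spanned by this
        # hit and the two most recent cached hits, and area-average over it.
        return area_average(texture, [(u, v), cached[-1], cached[-2]])
```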

    An Image-Based Framework for Global Illumination in Animated Environments

    Interacting with environments exhibiting the subtle lighting effects found in the real world gives designers a better understanding of the scene's structure by providing rich visual cues. The major hurdle is that global illumination algorithms are too inefficient to quickly compute their solutions for reasonably sized environments. When motion is allowed within the environment, the problem becomes even more intractable. We address the problem of sampling and reconstructing an environment's time-varying radiance distribution, its spatio-temporal global illumination information, allowing the efficient generation of arbitrary views of the environment at arbitrary points in time. The radiance distribution formalizes incoming chromatic radiance at all points within a constrained view space, along all directions, at all times. Since these distributions cannot, in general, be calculated analytically, we introduce a framework for specifying and computing sample values from the distribution and progress through a series of sample-based approximations designed to allow easy and accurate reconstruction of images extracted from the distribution. The first approximation is based on storing time-sequences of images at strategic locations within the chosen view space. An image of the environment is constructed by first blending the images contained in the individual time-sequences to get the desired time and then using view interpolation to merge the proximate views. The results presented here demonstrate the feasibility and utility of the method but also expose its major drawback: the temporal radiance variations cannot be modeled accurately with image sequences without resorting to a high sampling rate. This leads to the replacement of the image sequences by a sparse temporal image volume representation for storing randomly, or adaptively, placed radiance samples. Triangulation techniques are then used to reconstruct the radiance distribution at the desired time from a proximate set of stored spatio-temporal radiance samples. The results presented here show that temporal image volumes allow more accurate and efficient temporal reconstruction with less sampling than the more traditional time-sequence approach.
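
    As a concrete illustration of the triangulation-based reconstruction, the sketch below rebuilds a frame at time t from scattered (x, y, t) radiance samples using SciPy's Delaunay-based linear interpolator, which triangulates the sample cloud and interpolates barycentrically inside each simplex. The function and argument names are assumptions for this sketch, not the thesis's data structures.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def reconstruct_frame(samples, radiance, t, width, height):
    """Sketch: interpolate a sparse temporal image volume at time t.
    samples:  (N, 3) array of (x, y, t) sample coordinates.
    radiance: (N, 3) RGB radiance values at those samples."""
    interp = LinearNDInterpolator(samples, radiance)   # builds the triangulation
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    query = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, float(t))])
    frame = interp(query)                # NaN where a pixel falls outside the hull
    return np.nan_to_num(frame).reshape(height, width, 3)
```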

    Rendering spaces for architectural environments

    We present a new framework for rendering virtual environments. The framework is proposed as a complete scene description that embodies the space of all possible renderings of the given scene under all possible lighting scenarios. In effect, this hypothetical rendering space includes all possible light sources as part of the geometric model. While it would be impractical to implement the general framework, the approach allows us to look at the rendering problem in a new way. We therefore propose new representations that are subspaces of the entire rendering space. Some of these subspaces are computationally tractable and may be carefully chosen to serve a particular application. The approach is useful for both real and virtual scenes. The framework includes methods for rendering environments illuminated by artificial light, natural light, or a combination of the two.
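
    One reason some subspaces are tractable is the linearity of light transport: an image under any mix of light sources is a weighted sum of basis images, each rendered with a single source at unit intensity. A minimal numpy sketch of that superposition follows; the function and array names are illustrative, not the paper's notation.

```python
import numpy as np

def relight(basis_images, intensities):
    """Combine per-light basis renderings into one image.
    basis_images: (n_lights, H, W, 3), one rendering per light source
    at unit intensity; intensities: (n_lights,) per-source scales.
    Light transport is linear in emission, so the weighted sum is exact."""
    return np.tensordot(intensities, basis_images, axes=1)

# Example: 40% artificial light plus full daylight.
# final = relight(np.stack([artificial_img, daylight_img]), np.array([0.4, 1.0]))
```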
